Practical Inlining of Functions with Free Variables
Authors
Abstract
A long-standing practical challenge in the optimization of higher-order languages is inlining functions with free variables. Inlining code statically at a function call site is safe if the compiler can guarantee that the free variables have the same bindings at the inlining point as they do at the point where the function is bound as a closure (code and free variables). There have been many attempts to create a heuristic to check this correctness condition, from Shivers' k-CFA-based reflow analysis to Might's ∆CFA and anodization, but all of those have performance unsuitable for practical compiler implementations. In practice, modern language implementations rely on a series of tricks to capture some common cases (e.g., closures whose free variables are only top-level identifiers such as +) and rely on hand-inlining by the programmer for anything more complicated. This work provides the first practical, general approach for inlining functions with free variables. We also provide a proof of correctness, an evaluation of both the execution time and performance impact of this optimization, and some tips and tricks for implementing an efficient and precise control-flow analysis.
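To make the correctness condition concrete, here is a small OCaml sketch of our own (not from the paper): inlining the closure's body at a call site is meaning-preserving only when the free variable visible at that call site is the very binding the closure captured.

```ocaml
(* [n] is free in the closure [f] returned by [make]. *)
let make n =
  let f x = x + n in
  f

let safe n =
  let f x = x + n in
  (* This call sits in the same activation that created [f], so the [n]
     in scope here is exactly the binding [f] captured: rewriting [f 1]
     to [1 + n] is safe. *)
  f 1

let unsafe () =
  let g = make 5 in
  let n = 0 in
  (* [g]'s free variable [n] was bound to 5 inside [make], but the [n]
     visible at this call site is a different binding (0).  Textually
     inlining [g]'s body as [1 + n] here would read the wrong binding,
     so the call must not be inlined unless the compiler can prove the
     bindings coincide. *)
  n + g 1
```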
Similar papers
Lambda-Splitting: A Higher-Order Approach to Cross-Module Optimizations
We describe an algorithm for automatic inline expansion across module boundaries that works in the presence of higher-order functions and free variables; it rearranges bindings and scopes as necessary to move nonexpansive code from one module to another. We describe—and implement—the algorithm as transformations on λ-calculus. Our inliner interacts well with separate compilation and is efficient, ...
Lambda-Splitting: A Higher-Order Approach to Cross-Module Optimizations (Princeton University CS-TR-537-96)
We describe an algorithm for automatic inline expansion across module boundaries that works in the presence of higher-order functions and free variables; it rearranges bindings and scopes as necessary to move nonexpansive code from one module to another. We describe—and implement—the algorithm as transformations on λ-calculus. Our inliner interacts well with separate compilation and is efficient...
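A rough OCaml sketch, invented here, of the kind of rearrangement such an inliner performs: the small, nonexpansive part of an exported function is duplicated into the client module (with its free variables still referring to the defining module's bindings), while the expansive part stays behind as an ordinary cross-module call.

```ocaml
(* Defining module: the exported [bump] is nonexpansive but closes over
   module-level state; [expensive_log] is the expansive part. *)
module Counter = struct
  let hits = ref 0                                       (* free variable of [bump] *)
  let expensive_log n = Printf.printf "hits = %d\n" n    (* expansive worker *)
  let bump () = incr hits; if !hits mod 100 = 0 then expensive_log !hits
end

(* Client after a lambda-splitting-style transformation (hand-simulated):
   the nonexpansive body of [bump] is expanded here, with its free
   variables still resolving to [Counter]'s bindings; only the expansive
   [expensive_log] remains a cross-module call. *)
let client_loop () =
  for _ = 1 to 1_000 do
    incr Counter.hits;
    if !Counter.hits mod 100 = 0 then Counter.expensive_log !Counter.hits
  done
```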
Bytecode Closures
This article describes a new project to study the memory performance of three new implementation strategies for closures coined bytecode closures. The project proposes to compare the new implementation strategies to the classical strategy that dynamically allocates flat closures as heap data structures. The new closure representations are based on dynamically creating specialized bytecode inste...
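For context, a small OCaml sketch of our own of the classical flat-closure baseline the article compares against: each closure is a heap record pairing a code pointer with copies of its free variables' values, allocated at closure-creation time.

```ocaml
(* A flat closure: compiled code that takes its environment explicitly,
   plus the captured free-variable values flattened into one record. *)
type ('env, 'a, 'b) closure = {
  code : 'env -> 'a -> 'b;   (* body, parameterized over the environment *)
  env  : 'env;               (* copies of the free variables' values *)
}

let apply c x = c.code c.env x

(* Closure-converting [fun x -> x + n], whose free variable is [n]. *)
let make_adder n = { code = (fun env x -> x + env); env = n }

let () =
  let add5 = make_adder 5 in
  assert (apply add5 1 = 6)
```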
Specialization of Generic Array Accesses After Inlining
We have implemented an optimization that specializes type-generic array accesses after inlining of polymorphic functions in the native-code OCaml compiler. Polymorphic array operations (read and write) in OCaml require runtime type dispatch because of ad hoc memory representations of integer and float arrays. It cannot be removed even after being monomorphized by inlining because the intermedia...
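A simplified OCaml illustration of our own of what this specialization buys: the polymorphic version must dispatch at run time on whether its argument is a flat float array or an ordinary boxed array, whereas a monomorphic copy obtained after inlining can use the direct unboxed access.

```ocaml
(* Polymorphic: each element access must dispatch at run time on the
   array's representation (unboxed float array vs. boxed array). *)
let sum_generic (a : 'a array) (add : 'a -> 'a -> 'a) (zero : 'a) =
  Array.fold_left add zero a

(* Monomorphic copy, as obtained after inlining into a call site where
   the element type is known to be [float]: every access can be the
   direct unboxed float load, with no type dispatch. *)
let sum_float (a : float array) =
  let s = ref 0.0 in
  for i = 0 to Array.length a - 1 do
    s := !s +. Array.unsafe_get a i
  done;
  !s
```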
Generalized Boxings, Congruences and Partial Inlining
We present a new technique for optimizing programs, based on data-flow analysis. The initial goal was to provide a simple way to improve on Leroy and Peyton-Jones' techniques for optimizing boxings (allocations) and unboxings (field selection) away in strongly-typed functional programs. Our techniques achieve this goal, while not needing types any more (so it applies to Lisp as well as to ML), and...
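As a rough, invented OCaml illustration of the kind of boxing/unboxing pattern such an analysis removes: an allocation whose only consumer is an immediate field selection can be cancelled, letting the value flow unboxed.

```ocaml
(* [box]/[unbox] stand for the allocations and field selections the
   analysis tries to cancel out. *)
type 'a box = { value : 'a }
let box x = { value = x }
let unbox b = b.value

(* Before: the intermediate result is boxed only to be unboxed again. *)
let f_boxed x = unbox (box (x +. 1.0)) *. 2.0

(* After the optimization: the matching box/unbox pair is cancelled and
   the intermediate value flows through unboxed. *)
let f_unboxed x = (x +. 1.0) *. 2.0
```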
Journal: CoRR
Volume: abs/1306.1919
Pages: -
Publication date: 2013